In the early days of web development, the server generated web pages and the browser merely displayed them. Now the task of generating web pages is split between the server and the client (the browser), and there is much discussion on the web about what should be done on the server and what on the client; a big part of that question is the communication between the two. My experience has been mostly on the server side, but I have done my fair share of client-side development too.
Web scraping, which I have done a fair amount of, is not exactly web development; it is more like analyzing a website and extracting data from it. But there is enough overlap in technical skills with web development to justify putting it here instead of in a separate blog entry. And the more complicated and dynamic the client side of a site is, the more difficult the scraping becomes.
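To give a flavor of the "extracting data" part: the usual approach is to fetch a page and pull out the elements you care about by tag and attribute, typically with a library like Beautiful Soup (or Jsoup in Java). As a minimal sketch using only Python's standard-library ``html.parser`` instead, with a made-up HTML snippet and class name standing in for a real page:

```python
from html.parser import HTMLParser

# A hypothetical page fragment; real scraping would first fetch this over HTTP.
PAGE = """
<html><body>
  <ul id="books">
    <li class="title">Django for Beginners</li>
    <li class="title">Flask Web Development</li>
  </ul>
</body></html>
"""

class TitleExtractor(HTMLParser):
    """Collects the text of every <li class="title"> element."""

    def __init__(self):
        super().__init__()
        self.in_title = False
        self.titles = []

    def handle_starttag(self, tag, attrs):
        # attrs is a list of (name, value) pairs for the opening tag
        if tag == "li" and ("class", "title") in attrs:
            self.in_title = True

    def handle_endtag(self, tag):
        if tag == "li":
            self.in_title = False

    def handle_data(self, data):
        # Only keep text that appears inside a matching <li>
        if self.in_title and data.strip():
            self.titles.append(data.strip())

parser = TitleExtractor()
parser.feed(PAGE)
print(parser.titles)  # ['Django for Beginners', 'Flask Web Development']
```

This only works when the data is present in the HTML the server sends. On a dynamic site where JavaScript builds the page in the browser, there is nothing useful in the raw HTML to parse, which is why a browser-automation tool such as Selenium is often needed instead.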
Below are some of the libraries and tools that I use:
Django, Flask, Redis, RQ, Celery, RabbitMQ, Docker, Heroku, Netlify, Pelican, HTML, CSS, JavaScript, jQuery, AJAX, Bootstrap, reStructuredText, REST, Beautiful Soup, Jsoup, Selenium, XML, JSON, YAML, JBoss, Tomcat, Glassfish.